A Sustainable Multi-Modal Multi-Layer Emotion-Aware Service at the Edge

Authors

Abstract

Limited by the computational capabilities and battery energy of terminal devices and by network bandwidth, emotion recognition tasks fail to deliver a good interactive experience to users. The intolerable latency also seriously restricts the popularization of such applications in edge environments, for example fatigue detection in auto-driving. The development of edge computing provides a more sustainable solution to this problem. Based on edge computing, this article proposes a multi-modal multi-layer emotion-aware service (MULTI-EASE) architecture that takes the user's facial expression and voice as the data sources for emotion recognition, and employs the intelligent terminal, edge server, and cloud as a multi-layer execution environment. By analyzing the average delay of each task and the energy consumption at the mobile device, we formulate a delay-constrained energy minimization problem and apply a task scheduling policy across the multiple layers to reduce the end-to-end latency with an edge-based approach, further improving the user experience and saving energy through edge computing. Finally, a prototype system is implemented to validate MULTI-EASE; the experimental results show that MULTI-EASE is an efficient platform for emotion analysis applications and provide a valuable reference for dynamic task scheduling under the proposed architecture.
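The delay-constrained energy minimization mentioned in the abstract can be read, in generic form, as choosing an execution layer for each recognition sub-task so that the energy spent at the mobile device is minimized while the total end-to-end delay stays within a bound. The formulation below is only an illustrative sketch; the symbols x_i, e_i(·), d_i(·), and D_max are assumptions introduced for exposition, not notation taken from the paper.

\[
\begin{aligned}
\min_{x_1,\dots,x_n}\quad & \sum_{i=1}^{n} e_i(x_i) && \text{(energy consumed at the mobile device)}\\
\text{s.t.}\quad & \sum_{i=1}^{n} d_i(x_i) \le D_{\max} && \text{(end-to-end delay constraint)}\\
& x_i \in \{\text{terminal},\ \text{edge},\ \text{cloud}\} && \text{(execution layer chosen for sub-task } i)
\end{aligned}
\]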


Similar articles

Technology-aware multi-domain multi-layer routing

Transporting Big Data requires high-speed connections between end-hosts. Research and educational networks typically are state-of-the-art networks that facilitate such high-speed user-created network connections, possibly spanning multiple domains. However, there are many different high-speed optical data plane standards and implementations, and vendors do not always create compatible data plane...


Toward Multi-modal Music Emotion Classification

Categorical music emotion classification, which divides emotion into classes and relies on audio features alone, has reached a performance limit due to the semantic gap between the object feature level and the human cognitive level of emotion perception. Motivated by the fact that lyrics carry rich semantic information of a song, we propose a multi-modal ...


Towards Efficient Multi-Modal Emotion Recognition

The paper presents a multi‐modal emotion recognition system exploiting audio and video (i.e., facial expression) information. The system first processes both sources of information individually to produce corresponding matching scores and then combines the computed matching scores to obtain a classification decision. For the video part of the system, a novel ...
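As a rough illustration of the score-level fusion described above (each modality is classified separately and the resulting matching scores are then combined into a single decision), the minimal Python sketch below assumes hypothetical per-modality score vectors and a fixed fusion weight; none of these names or values come from the cited paper.

import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]

def softmax(z: np.ndarray) -> np.ndarray:
    """Turn raw matching scores into a normalized score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_and_classify(audio_scores: np.ndarray,
                      video_scores: np.ndarray,
                      audio_weight: float = 0.4) -> str:
    """Score-level fusion: weighted sum of per-modality scores, then argmax."""
    fused = audio_weight * audio_scores + (1.0 - audio_weight) * video_scores
    return EMOTIONS[int(np.argmax(fused))]

# Hypothetical matching scores produced by separate audio and facial-expression models.
audio = softmax(np.array([1.2, 0.3, 0.8, 0.1]))
video = softmax(np.array([0.2, 2.1, 0.4, 0.6]))
print(fuse_and_classify(audio, video))  # prints the emotion label chosen from the fused scores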


Multi-modal Emotion Recognition – More "cognitive" Machines

Based on several results related to studies on emotions, we suggest that the process of emotion recognition is assisted by some internal structure of the cognitive images of emotions, which are at different levels of knowledge representation. We concede that the main models proposed in psychology are in correspondence with these levels and in this sense complementary. We summarize the state-of-t...


Boosting for Multi-Modal Music Emotion Classification

With the explosive growth of music recordings, automatic classification of music emotion has become one of the hot spots in research and engineering. Typical music emotion classification (MEC) approaches apply machine learning methods to train a classifier based on audio features. In addition to audio features, the MIDI and lyrics features of music also contain useful semantic information for pred...



Journal

Journal title: IEEE Transactions on Sustainable Computing

Year: 2022

ISSN: 2377-3790, 2377-3782

DOI: https://doi.org/10.1109/tsusc.2019.2928316